14 research outputs found

    Modernization of the security and equipment-control system via transponders: Development of the hospital's general inventory management software

    No full text
    Thesis (Biomedical Engineering), Instituto Politécnico Nacional, UPIBI, 2009, 1 PDF file (97 pages). tesis.ipn.m

    Space-time pattern recognition in electrophysiological signals from evoked potentials using dynamic neural networks

    No full text
    For centuries humanity has tried to decipher the brain; technologies such as EEG are able to record the brain's electrical activity [67]. The popularity of EEG in scientific research is due to the large number of studies and applications of its recordings, as well as the portability and low cost of the equipment. Techniques derived from EEG include evoked potentials (EPs), which involve time-averaging the EEG activity while measuring the response to a visual, somatosensory, or auditory stimulus [67]. Clearly, reading the brain's response is not the challenge; the main problem lies in decoding that response [49]. Many algorithms have been proposed for this decoding, ranging from optimal linear estimators to several versions of Bayesian decoders, and even different neural network (NN) topologies. One of the most frequently cited limitations of brain-decoding techniques is, in fact, the preprocessing applied to the EEG signals [56], [64]. It has been established that a large amount of information is lost from the raw signal. Trying to avoid this loss, some algorithms work directly on the raw signal, but their decoding is slow and achieves low correct-classification rates. At present, most algorithms for automatic EEG interpretation fall within so-called pattern recognition theory [8]. NNs have been successfully applied in various pattern-classification areas [15], [38], [93]. Different types of static NNs have been used for EEG decoding [98], [127], most of their applications being the diagnosis of brain conditions such as epilepsy, autism, Alzheimer's disease, brain-tissue degeneration, and sleep disorders, among others.
However, it is clear that this kind of NN neglects the continuous nature of the electrophysiological signal. This thesis proposes the application of Dynamic Neural Networks (DNNs) in order to account for the continuous nature of EEG signals. DNNs are known to be more complex to implement than static NNs, but they are also more capable, because data are stored and processed over time: the inputs are not independent, but rather interact with and influence one another. Four topologies following this learning structure are proposed. First, a recurrent neural network (RNN) that analyzes a discretized version of the EEG signals with a fixed sampling period. Next, a differential neural network (DfNN) applied to classify the EEG signals obtained from the different databases considered in this thesis, together with the foundations for hardware implementations of this DfNN: first on a Field Programmable Gate Array (FPGA), then in Very-Large-Scale Integration (VLSI), and finally in analog circuitry. Then, a time-delay neural network (TDNN) is tested with the same EEG signals; a TDNN is a kind of DNN with the advantage of taking into account previous information from the same electrophysiological signal. Finally, a complex-valued neural network (CVNN) enables classification based on the frequency analysis of the signal.
For all the software NNs working with the database taken from [97], correct classification was validated using k-fold cross-validation, achieving an average correct-classification rate above 90%. For the hardware NNs, the approach differs in each case: on the FPGA a DfNN was implemented and an interface between the PC and the FPGA was built. Part of the VHDL code from the FPGA implementation was reused for the VLSI design; in this case the chip was not tested due to fabrication lead times, but the design was approved for manufacture. For the analog neural network, only a circuit simulation was performed. The correct-classification rates of the software NNs are more than 5% above those of other works that use the same database and apply no preprocessing to the EEG signal. To the best of the authors' knowledge, there is no prior hardware implementation of a DfNN, making this one of the most important contributions of this thesis
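The k-fold validation scheme mentioned above can be sketched as follows; this is a minimal illustration with a hypothetical nearest-centroid classifier standing in for the thesis's networks (the actual EEG database and DNN topologies are not reproduced here):

```python
import numpy as np

def k_fold_accuracy(X, y, train_fn, k=5, seed=0):
    """Estimate classification accuracy with k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        predict = train_fn(X[train], y[train])   # train_fn returns a predictor
        scores.append(np.mean(predict(X[test]) == y[test]))
    return float(np.mean(scores))

# Hypothetical stand-in classifier: nearest centroid per class.
def nearest_centroid(Xtr, ytr):
    labels = np.unique(ytr)
    C = np.stack([Xtr[ytr == c].mean(axis=0) for c in labels])
    return lambda X: labels[np.argmin(
        ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)]

# Synthetic two-class data standing in for feature patterns.
X = np.vstack([np.random.default_rng(1).normal(0, 1, (50, 4)),
               np.random.default_rng(2).normal(3, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
acc = k_fold_accuracy(X, y, nearest_centroid, k=5)
```

Each sample is held out exactly once, so the averaged fold accuracies estimate generalization without a separate test set.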

    Compliant Cross-Axis Joints: A Tailoring Displacement Range Approach via Lattice Flexures and Machine Learning

    No full text
    Compliant joints are flexible elements that allow displacement due to the elastic deformations they experience under the action of external loading. The flexible parts responsible for these displacements are known as flexure hinges. Displacement, or motion range, in compliant joints depends on the stiffness of the flexure hinges and can be tailored through various parameters, including the overall dimensions, the base material, and the distribution within the hinge. Considering the distribution, we propose the stiffness modification of a compliant cross-axis joint via the use of lattice mechanical metamaterials. Due to the wide range of parameters that influence the stiffness of a lattice, different machine learning algorithms (artificial neural networks, support vector machines, and Gaussian process regression) were proposed to forecast these parameters. Here, the machine learning algorithm with the best forecasting was Gaussian process regression; this algorithm has the advantage of tuning well even with small regression databases, adjusting more easily to the specific data. Hexagonal, re-entrant, and square lattices were studied as flexure hinges. For each, the effect of the unit cell size and its orientation with respect to the principal axis on the effective stiffness was studied via computational and laboratory experiments on additively manufactured samples. Finite element predictions resulted in good agreement with the experimentally obtained data. As a result, using lattice-flexure hinges led to increments in displacement ranging from double to ten times those obtained with solid hinges. The most suitable machine learning algorithm was Gaussian process regression, with a maximum error of 0.12% when compared to the finite element analysis results
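The regression task described here — forecasting effective stiffness from lattice parameters — can be sketched with a minimal Gaussian process regression in plain NumPy; the cell sizes and stiffness values below are hypothetical placeholders, not the paper's data:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gpr_predict(X_train, y_train, X_test, noise=1e-6, **kern):
    """GP posterior mean and variance at X_test (zero-mean prior)."""
    K = rbf_kernel(X_train, X_train, **kern) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train, **kern)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s @ alpha
    v = np.linalg.solve(K, K_s.T)
    var = np.diag(rbf_kernel(X_test, X_test, **kern) - K_s @ v)
    return mean, var

# Hypothetical stand-in: stiffness vs. unit-cell size for one lattice type.
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])   # cell size (mm), assumed
y = np.array([10.0, 7.1, 5.2, 4.0, 3.3])            # effective stiffness, assumed
mean, var = gpr_predict(X, y, np.array([[2.5]]), length_scale=1.5)
```

With only five training points, the posterior already interpolates smoothly between neighbors, which illustrates why GPR behaves well on the small databases the abstract mentions.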

    Design of an Aluminum Alloy Using a Neural Network-Based Model

    No full text
    Lightweight materials are in constant progress due to the new requirements of mobility. At the same time, it is mandatory to meet the internal standards of the original equipment manufacturers to guarantee product quality, and market regulations are necessary to reduce or eliminate pollution emissions. In order to reach these technical requirements, the design is optimized, and new materials and alloys are evaluated. The search for these new types of materials is long and expensive. For this search, new technologies have emerged, such as integrated computational materials engineering, which is a valuable tool to forecast, through simulation, alloy characteristics that meet specific requirements without fabrication. This research develops an artificial neural network to establish the chemical composition of a new aluminum alloy based on the desired manufacturing characteristics as well as fatigue strength. For this, the proposed artificial neural network was trained with the chemical composition of preexisting aluminum-based alloys and the resulting desired mechanical properties. The significant contribution of the proposed research lies not only in the neural network's high forecasting performance but also in the fact that, to train and validate it, not only were simulations of its responses to the different alloy possibilities tried, but the responses were also validated through experimental laboratory tests performed on a uniaxial testing machine. The proposed artificial neural network shows an average correlation of 99.33% between its forecasts and laboratory testing
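The workflow described — training a network on property/composition pairs and then checking the correlation between its forecasts and reference data — can be sketched with a small one-hidden-layer network trained by gradient descent on synthetic data; the property-to-composition mapping below is entirely hypothetical and only illustrates the training-and-correlation pattern, not the paper's alloy database:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mapping: normalized desired properties (e.g. fatigue strength,
# hardness) -> alloy composition fractions (e.g. Si, Mg, Cu). Synthetic only.
P = rng.uniform(0.0, 1.0, (200, 2))                  # properties
W_true = np.array([[0.5, 0.2, 0.3], [0.1, 0.6, 0.3]])
C = np.tanh(P @ W_true) + 0.01 * rng.normal(size=(200, 3))  # compositions

# One-hidden-layer network trained with plain gradient descent on MSE.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 3)); b2 = np.zeros(3)
lr = 0.1
for _ in range(3000):
    H = np.tanh(P @ W1 + b1)          # hidden activations
    Y = H @ W2 + b2                   # predicted composition
    E = Y - C                         # prediction error
    gW2 = H.T @ E / len(P); gb2 = E.mean(0)
    dH = (E @ W2.T) * (1 - H**2)      # backprop through tanh
    gW1 = P.T @ dH / len(P); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

pred = np.tanh(P @ W1 + b1) @ W2 + b2
corr = np.corrcoef(pred.ravel(), C.ravel())[0, 1]   # forecast vs. reference
```

The final correlation coefficient plays the role of the 99.33% agreement figure in the abstract: it measures how closely the network's forecasts track the reference measurements.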

    Design and construction of a quantitative model for the management of technology transfer at the Mexican elementary school system

    Get PDF
    Nowadays, schools in Mexico have financial autonomy to invest in infrastructure, although they must adjust their spending to national education projects. This represents a challenge, since it is complex to predict the effectiveness that an ICT (Information and Communication Technology) project will have in certain areas of the country that do not even have the necessary infrastructure to start it up. To address this problem, it is important to provide schools with a System for Technological Management (STM) that allows them to identify, select, acquire, adopt, and assimilate technologies. In this paper, the implementation of a quantitative model applied to an STM is presented. The quantitative model employs parameters of schools regarding basic infrastructure such as essential services, computer devices, and connectivity, among others. The results of the proposed system are presented: of the 5 possible points for a correct transfer, only 3.07 are obtained, where the highest component is close to 0.88 for the availability of electric power, and the lowest are internet connectivity and availability, at 0.36 and 0.39 respectively, which can strongly condition the success of the program
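The scoring idea can be sketched as a sum of normalized infrastructure components; only the 0.88, 0.36, and 0.39 component values and the 3.07 total come from the abstract, while the remaining component names and values are hypothetical fillers chosen so the total stays consistent:

```python
# Sketch of the quantitative score: each infrastructure parameter contributes
# a normalized component in [0, 1]; the transfer-readiness score is the sum
# over 5 components (maximum 5 points).
components = {
    "electric_power":   0.88,   # from the abstract
    "water_sanitation": 0.78,   # hypothetical filler
    "computer_devices": 0.66,   # hypothetical filler
    "internet_conn":    0.36,   # from the abstract
    "internet_avail":   0.39,   # from the abstract
}
score = sum(components.values())                 # 3.07 of 5 possible points
weakest = min(components, key=components.get)    # the limiting factor
```

Ranking the components this way is what lets the model flag connectivity as the factor most likely to condition the success of an ICT project.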

    Long short-term memory based modeling of heat treatment and trigger mechanism effect on thin-walled aluminum 6063 T5 for crashworthiness

    No full text
    Vehicle safety relies on the capacity of vehicle systems to decrease the likelihood of biomechanical injuries to both vehicle occupants and pedestrians. This can take the form of active or passive measures applied to the materials and their geometry in the event of an impact. Nevertheless, due to the nonlinear behaviour of materials and deformation, a distinct collapse process emerges that requires analysis, whether virtual or through experimental tests. While observed performance can be ascertained through experimental designs, executing such a design for every parameter combination can be time-consuming and costly. This study employs a Long Short-Term Memory (LSTM) network to predict the energy absorbed and the crushing force of thin-walled aluminium 6063 T5 tubes subject to different collapse triggers and heat treatments. The LSTM model is constructed using data derived from experiments that consider trigger shape, area, trigger position, furnace duration, cooling temperature, and heat-treatment soaking method. The LSTM model achieved a Root Mean Square Error (RMSE) of 0.56 and 0.0025 for crushing force and energy absorption, respectively. LSTM proves to be a valuable tool for predicting results in nonlinear analysis, particularly in the context of crush behaviour. A comparison of the LSTM and finite element analysis (FEA) predictive performance is also presented
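The recurrent step underlying such a model can be sketched in NumPy; the weights below are untrained and the input features are hypothetical, so this only illustrates the LSTM gating mechanics, not the paper's trained predictor:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gates stacked as [input, forget, cell, output]."""
    n = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:n])          # input gate
    f = sigmoid(z[n:2*n])        # forget gate
    g = np.tanh(z[2*n:3*n])      # candidate cell state
    o = sigmoid(z[3*n:4*n])      # output gate
    c_new = f * c + i * g        # memory keeps earlier collapse history
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Untrained toy run: 2 hypothetical input features (e.g. crush displacement,
# trigger depth) through a 4-unit LSTM over 10 time steps; a trained model
# would map the final hidden state to crushing force / absorbed energy.
rng = np.random.default_rng(0)
n_in, n_hid, T = 2, 4, 10
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h = np.zeros(n_hid); c = np.zeros(n_hid)
for t in range(T):
    x_t = rng.normal(size=n_in)
    h, c = lstm_step(x_t, h, c, W, U, b)
```

The cell state `c` is what lets the model carry earlier stages of the collapse sequence forward, which is why an LSTM suits this kind of path-dependent nonlinear response.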

    Hand Movement Classification Using Burg Reflection Coefficients

    No full text
    Classification of electromyographic signals has a wide range of applications, from clinical diagnosis of different muscular diseases to biomedical engineering, where their use as input for the control of prosthetic devices has become a hot topic of research. The challenge of classifying these signals lies in the accuracy of the proposed algorithm and the possibility of its implementation in hardware. This paper considers the problem of electromyography signal classification, solved with the proposed signal-processing and feature-extraction stages, with the focus lying on the signal model and time-domain characteristics for better classification accuracy. The proposal considers a simple preprocessing technique that produces signals suitable for feature extraction, and uses the Burg reflection coefficients to form learning and classification patterns. These coefficients yield a classification rate competitive with the time-domain features commonly used. Feature extraction from electromyographic signals can retain traits that are of little use to machine learning models; using feature-selection algorithms provides higher classification performance with as few traits as possible. The algorithms achieved classification rates of up to 100% with low pattern dimensionality, together with other kinds of uncorrelated attributes for hand-movement identification
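Burg's method produces the reflection coefficients directly from a signal window without preliminary transforms, which fits the simple-preprocessing emphasis above; a minimal sketch on a synthetic AR(1) signal (not real EMG data):

```python
import numpy as np

def burg_reflection(x, order):
    """Reflection coefficients of a signal window via Burg's method."""
    f = np.asarray(x, dtype=float).copy()   # forward prediction error
    b = f.copy()                            # backward prediction error
    ks = []
    for _ in range(order):
        num = -2.0 * np.dot(f[1:], b[:-1])
        den = np.dot(f[1:], f[1:]) + np.dot(b[:-1], b[:-1])
        k = num / den                       # reflection coefficient, |k| <= 1
        ks.append(k)
        # update prediction errors and shrink the window by one sample
        f, b = f[1:] + k * b[:-1], b[:-1] + k * f[1:]
    return np.array(ks)

# Toy "EMG window": AR(1)-like signal x[n] = 0.9 x[n-1] + noise; the first
# reflection coefficient should then be close to -0.9.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for n in range(1, 2000):
    x[n] = 0.9 * x[n - 1] + rng.normal()
k = burg_reflection(x, order=4)   # a 4-coefficient feature vector
```

A fixed-order vector of such coefficients gives a low-dimensional pattern per window, which is the kind of compact feature set the abstract credits for high classification rates.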